So far we have considered the basic lognormal model for stock prices.
This is perfectly valid as a starting point, but in reality stock price movements are not adequately modeled by the lognormal model.
Notably they:
When options are 'at the money' these considerations are not that important, but when an option is severely out of the money they matter a great deal because:
The lognormal model may predict effectively no chance of the option returning to the money and hence value it at close to zero, whereas in reality the stock price might experience a jump, so the option could still have considerable value.
We look at a number of ways of modeling these additional features within the Monte Carlo framework.
The main ones are:
We always have a problem with models in that the data we have to fit them contains considerable randomness, so there is an ever-present danger of over-fitting too many parameters.
This spreadsheet shows how daily returns on the FTSE100 compare with the lognormal distribution.
Share price processes with jumps are more formally described by Lévy processes.
Given that share prices respond to information, whenever a significant piece of information that might affect a share price becomes known there is a possibility that the share price will jump, so a simple lognormal model of share price movements may not be appropriate.
This is particularly important for out of the money options close to maturity
There are two different types of jumps we need to consider:
What fundamental fact about the modeling of share prices with a lognormal Monte Carlo process can we use to simply introduce price jumps into our analysis?
We are indifferent as to whether we use a sequence of small lognormal steps or one lognormal step for the whole duration of the option
Consequently the incorporation of known jumps into an MC valuation is very simple:
How should the size of the jump be modeled?
It is important to note here that, for consistency with arbitrage-free pricing, each jump should have an expected value of 0 under $\mathbb{Q}$.
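As a rough illustration, here is a minimal Python sketch (the function name and parameter values are my own, illustrative choices) of a lognormal path under $\mathbb{Q}$ with a single jump at a known time, where the multiplicative jump factor is constructed so that the jump return has expected value 0:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (my own choices, not taken from the course material)
S0, r, sigma, T = 100.0, 0.05, 0.20, 1.0
t_jump = 0.5     # known jump time (years)
delta = 0.10     # standard deviation of the log jump size

def terminal_price_with_known_jump():
    """Simulate S_T under Q: one lognormal step to the jump time, a jump whose
    return has expected value 0, then one lognormal step to maturity."""
    S = S0 * np.exp((r - 0.5 * sigma**2) * t_jump
                    + sigma * np.sqrt(t_jump) * rng.standard_normal())
    # Jump factor e^Z with E[e^Z] = 1, so the jump return e^Z - 1 has mean 0 under Q
    Z = rng.normal(-0.5 * delta**2, delta)
    S *= np.exp(Z)
    S *= np.exp((r - 0.5 * sigma**2) * (T - t_jump)
                + sigma * np.sqrt(T - t_jump) * rng.standard_normal())
    return S

# Sanity check: E[S_T] should remain close to S0 * exp(rT)
sims = np.array([terminal_price_with_known_jump() for _ in range(50_000)])
print(sims.mean(), S0 * np.exp(r * T))
```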
This is very similar in principle to having price jumps at known times with the exception that we now have to model when the jumps occur as well
If we believe jumps are going to arise as a Poisson process how do we model the times of the jumps?
The interval between the jumps will be distributed exponentially
How do we model exponential waiting times?
We use the inverse CDF method that we learnt in Chapter 1, since the pdf of the exponential distribution can be integrated and the resulting CDF inverted in closed form.
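For example, a short Python sketch (function name and parameter values are illustrative) that generates the jump times in $(0, T]$ by summing exponential waiting times drawn with the inverse CDF method, $t = -\ln(U)/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T = 3.0, 1.0   # illustrative Poisson rate (jumps per year) and option term

def jump_times(lam, T):
    """Generate the jump times in (0, T] by summing exponential waiting times,
    each sampled by the inverse CDF method: t = -ln(U) / lambda."""
    times, t = [], 0.0
    while True:
        t += -np.log(rng.uniform()) / lam    # inverse of F(t) = 1 - exp(-lambda * t)
        if t > T:
            return np.array(times)
        times.append(t)

print(jump_times(lam, T))
```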
Adapt your option valuation code from the previous chapter to allow for share price jumps of size $N(\mu, \sigma^2)$ which arrive at a Poisson rate of $\lambda$.
Password protected spreadsheet is here
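One possible Python sketch of this exercise (all names and default parameter values are illustrative, and this is only one reading of how the jump size is applied): here the jumps are added to the log price, and the drift compensation needed for strict no-arbitrage is the subject of the Merton model below.

```python
import numpy as np

rng = np.random.default_rng(1)

def call_value_with_jumps(S0, K, r, sigma, T, lam, mu_J, sigma_J, n_sims=200_000):
    """Monte Carlo value of a European call where, on top of the lognormal diffusion,
    N ~ Poisson(lam * T) jumps are added to the log price, each jump N(mu_J, sigma_J^2)."""
    n_jumps = rng.poisson(lam * T, n_sims)
    # sum of n_jumps iid N(mu_J, sigma_J^2) variables is N(n * mu_J, n * sigma_J^2)
    jump_sum = rng.normal(mu_J * n_jumps, sigma_J * np.sqrt(n_jumps))
    log_ST = (np.log(S0) + (r - 0.5 * sigma**2) * T
              + sigma * np.sqrt(T) * rng.standard_normal(n_sims)
              + jump_sum)
    payoff = np.maximum(np.exp(log_ST) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(call_value_with_jumps(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                            lam=1.0, mu_J=0.0, sigma_J=0.1))
```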
Clearly there are other ways of including jumps into a share price model.
The important thing to retain though is the no arbitrage condition
In Merton's jump diffusion process, the governing SDE is:
$\frac{dS_t}{S_t} = (r-\lambda \bar k) dt + \sigma dW_t + k dq_t$
The jump event is governed by a compound Poisson process $q_t$ with Poisson rate $\lambda$
and $k$ is the magnitude of the random jump, given by: $\ln(1+k) \sim N(\gamma, \delta^2)$
so $\bar k = E(k) = e^{\gamma+\delta^2 / 2} - 1$
The key thing to note here, though, is that the expected return of the process under $\mathbb{Q}$ will still be $e^{r \Delta t}$, because the expected size of the jumps is backed out of the drift component of the diffusion.
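As a check, here is a minimal Python sketch (parameter values are illustrative) that simulates $S_T$ under $\mathbb{Q}$ for the Merton jump diffusion and confirms numerically that the mean of the simulated prices is close to $S_0 e^{rT}$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (my own choices)
S0, r, sigma, T = 100.0, 0.05, 0.20, 1.0
lam, gamma_J, delta_J = 0.8, -0.05, 0.15            # jump intensity and log-jump-size parameters
k_bar = np.exp(gamma_J + 0.5 * delta_J**2) - 1.0    # expected jump size E(k)

def merton_terminal_prices(n_sims=500_000):
    """Draw S_T under Q for the Merton jump diffusion: the drift is compensated by
    -lam * k_bar, and the lognormal diffusion is multiplied by the product of the jumps."""
    Z = rng.standard_normal(n_sims)
    n_jumps = rng.poisson(lam * T, n_sims)
    # sum of n_jumps iid N(gamma_J, delta_J^2) log jump sizes
    log_jumps = rng.normal(gamma_J * n_jumps, delta_J * np.sqrt(n_jumps))
    return S0 * np.exp((r - lam * k_bar - 0.5 * sigma**2) * T
                       + sigma * np.sqrt(T) * Z + log_jumps)

# The compensation keeps the risk-neutral expectation at S0 * exp(rT)
ST = merton_terminal_prices()
print(ST.mean(), S0 * np.exp(r * T))
```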
Sometimes we may notice that share prices not only move in occasional discrete jumps but can become more volatile after a given event.
This reflects the reality of traders trying to settle on a new view of the market as they take on new information.
There are many different views we could take about how to model this, depending to a large extent on how we interpret the real-world events that drive it.
Here are a few examples
This is easy to model but very difficult to parameterise as there are now multiple modelling variables that would need to be extracted from random historic share price data
What additional parameters would now be needed?
Adapt your model so that a jump in price is followed by one month of a volatility increase of proportionately the same size as the price jump.
Could you adapt your code so that the period of increased volatility was random?
Yes - just generate time intervals from an exponential distribution
Could you adapt your code so that the increased volatility faded away?
Not as easily, as you would no longer have a lognormal distribution over each time interval; see the sketch below.
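As a rough illustration of the point (my own construction, not part of the course material), here is a sketch in which the local volatility is raised after each jump and decays exponentially back to its base level, so the path has to be built from many small lognormal steps:

```python
import numpy as np

rng = np.random.default_rng(3)

S0, r, sigma_base, T, dt = 100.0, 0.05, 0.20, 1.0, 1 / 252
jump_times = np.array([0.3, 0.7])   # illustrative times at which jumps have occurred
boost, tau = 0.10, 1 / 12           # volatility boost per jump and its decay time (years)

def terminal_price_with_fading_vol():
    """Step through time with a local volatility that is raised by `boost` after each
    jump and decays exponentially with time constant `tau`; each small step is
    lognormal, but the price over any longer interval no longer is."""
    S, t = S0, 0.0
    while t < T:
        past = jump_times[jump_times <= t]
        sigma_t = sigma_base + boost * np.exp(-(t - past) / tau).sum()
        S *= np.exp((r - 0.5 * sigma_t**2) * dt
                    + sigma_t * np.sqrt(dt) * rng.standard_normal())
        t += dt
    return S

print(terminal_price_with_fading_vol())
```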
This is the Monte Carlo version of the Heston Stochastic Volatility Model
Formally the set up is as follows
It is an observed fact of markets that there are periods when volatility is higher than others
The traditional Black-Scholes model does not capture this, as it assumes a constant volatility $\sigma$.
What market evidence have you seen of this?
Volatility Smile
The Heston Stochastic Volatility Model (HSV) seeks to address this problem by treating volatility itself as its own stochastic process
The original 1993 paper by Heston can be found here
Suppose under a real world probability measure $\mathbb{P}$, the price process $S_t$ of the underlying security is governed by the following diffusion process
$$dS_t = \mu S_t dt + \sigma_t S_t dW_{1t}$$
and the volatility process $\sigma_t$ follows the diffusion process:
$$d \sigma_t = -\alpha \sigma_t dt + \beta dW_{2t}$$
where $\mu$ is the expected rate of return, $W_{1t}$ and $W_{2t}$ are two standard Brownian motions so that:
$$Cov(dW_{1t}, dW_{2t}) = \rho dt$$
where $\rho$ is the constant correlation coefficient between $W_{1t}$ and $W_{2t}$
We also assume the continuously compounded rate of interest is a constant $r$
To make the algebra easier we let: $V_t = \sigma_t^2$
Applying Ito's Lemma to $V_t=\sigma_t^2 = f(\sigma_t)$ where $f(x)=x^2$ gives:
$$dV_t = \kappa(\theta - V_t)dt + \gamma \sqrt{V_t} dW_{2t}$$
where $\kappa = 2\alpha$, $\theta = \frac{\beta^2}{2\alpha}$ and $\gamma = 2\beta$
Proof
$df(t, X_t) = \left[\frac{\partial f}{\partial t} + \mu(t, X_t) \frac{\partial f}{\partial x} + \frac{1}{2} \sigma^2(t, X_t) \frac{\partial^2 f}{\partial x^2} \right]dt + \sigma(t, X_t) \frac{\partial f}{\partial x} dW_t$
where
$dX_t = \mu(t, X_t)\, dt + \sigma(t, X_t)\, dW_t$
But now we have
$d \sigma_t = -\alpha \sigma_t dt + \beta dW_{2t}$
so we re-write Ito's Lemma for $\sigma$ rather than $X$
$df(t, \sigma_t) = \left[\frac{\partial f}{\partial t} - \alpha \sigma_t \frac{\partial f}{\partial \sigma} + \frac{1}{2} \beta^2 \frac{\partial^2 f}{\partial \sigma^2} \right]dt + \beta \frac{\partial f}{\partial \sigma} dW_{2t}$
We want to get to $dV_t$ and $V_t = \sigma_t^2$ so we use $f(x)=x^2$
So $\frac{\partial f}{\partial t}=0$, $\frac{\partial f}{\partial x}=2x$ and $\frac{\partial^2 f}{\partial x^2}=2$
So Ito's Lemma becomes
$dV_t = d\sigma^2_t = df(\sigma_t) = \left[0 - \alpha \sigma_t \cdot 2 \sigma_t + \frac{1}{2} \beta^2 \cdot 2\right]dt + \beta \cdot 2\sigma_t \, dW_{2t}$
$dV_t = \left[\beta^2 - 2 \alpha \sigma_t^2\right]dt + 2\beta\sigma_t \, dW_{2t}$
$dV_t = 2\alpha\left[\frac{\beta^2}{2\alpha} - \sigma_t^2\right]dt + 2\beta\sigma_t \, dW_{2t}$
$dV_t = 2\alpha\left[\frac{\beta^2}{2\alpha} - V_t\right]dt + 2\beta \sqrt{V_t} \, dW_{2t}$
Defining $x_t = ln S_t$, we see that by Ito's Lemma
$$dx_t = \left(\mu - \frac{1}{2} V_t \right) dt + \sqrt{V_t} dW_{1t}$$
The Cholesky Decomposition gives us that:
$$W_{2t}=\rho W_{1t} + \sqrt{1-\rho ^2} W_{0t}$$
where $W_{0t}$ and $W_{1t}$ are two independent Brownian motions under $P$. This gives us the correlation of $\rho$ between $W_{1t}$ and $W_{2t}$
Check
$cov(W_{1t}, W_{2t}) = cov(W_{1t}, \rho W_{1t} + \sqrt{1-\rho ^2} W_{0t})$,
$cov(W_{1t}, W_{2t}) = cov(W_{1t}, \rho W_{1t}) + cov(W_{1t}, \sqrt{1-\rho ^2} W_{0t})$,
$cov(W_{1t}, W_{2t}) = \rho \times var(W_{1t}) + \sqrt{1-\rho ^2} \times cov(W_{1t}, W_{0t})$,
$cov(W_{1t}, W_{2t}) = \rho t$, since $var(W_{1t}) = t$ and $W_{0t}$ is independent of $W_{1t}$,
and $corr(W_{1t}, W_{2t}) = \frac{cov(W_{1t}, W_{2t})}{\sqrt{var(W_{1t}) \, var(W_{2t})}} = \frac{\rho t}{t} = \rho$
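The same check can be run numerically. Here is a short Python sketch (the value of $\rho$ and the step size are illustrative) that builds correlated increments with the Cholesky construction and confirms the sample correlation is close to $\rho$:

```python
import numpy as np

rng = np.random.default_rng(11)

rho, dt, n = -0.7, 1 / 252, 1_000_000   # illustrative correlation and step size

# Independent standard Brownian increments over a step dt
dW0 = np.sqrt(dt) * rng.standard_normal(n)
dW1 = np.sqrt(dt) * rng.standard_normal(n)

# Cholesky construction: dW2 = rho * dW1 + sqrt(1 - rho^2) * dW0
dW2 = rho * dW1 + np.sqrt(1.0 - rho**2) * dW0

# The sample correlation should be close to rho
print(np.corrcoef(dW1, dW2)[0, 1])
```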
We can now rewrite the HSV as follows:
$$dx_t = \left(\mu - \frac{1}{2} V_t \right) dt + \sqrt{V_t} dW_{1t}$$
$$dV_t = \kappa (\theta - V_t) dt + \rho \gamma \sqrt{V_t} dW_{1t} + \sqrt{1-\rho^2} \gamma \sqrt{V_t} dW_{0t}$$
So far this is all well and good. Now we need to map our stochastic volatility process into $\mathbb{Q}$ world
Let us start with a review of the Cameron-Martin-Girsanov theorem.
If $W_t$ is a S.B.M. under $P$ and $\gamma_t$ is a previsible process then:
$\overset{\sim}{W_t} = W_t + \displaystyle\int_{0}^{t} \gamma_s ds$ is a S.B.M under $\mathbb{Q}$ where $\mathbb{Q}$ is defined by:
$\frac{d\mathbb{Q}}{d\mathbb{P}}=exp\left(-\displaystyle\int_{0}^{T} \gamma_t dW_t - \frac{1}{2} \displaystyle\int_{0}^{T} \gamma_t^2 dt \right)$
This is far more powerful than the special case we actually need. In practice what we do is to choose the $\gamma_t$ to take out the market price of risk, so that $\mathbb{Q}$ becomes the risk-neutral probability measure.
We start with the following inspired definition:
$\eta _t = \left(\frac{\lambda(t, S_t, V_t) - \gamma \rho(\mu - r)}{\gamma \sqrt{1-\rho^2} \sqrt{V_t}}, \frac{\mu-r}{\sqrt{V_t}} \right)' \in \mathbb{R}^2$
where $\lambda(t,S_t, V_t)$ is the market price of volatility risk
We now write $\textbf{W}_t = (W_{0t}, W_{1t})' \in \mathbb{R}^2$ for convenience and define $\frac{d\mathbb{Q}}{d\mathbb{P}}$ by:
$\frac{d\mathbb{Q}}{d\mathbb{P}} = \exp \left(-\displaystyle\int_0^t \eta^{'}_{u} d\textbf{W}_u - \frac{1}{2} \displaystyle\int_0^t ||\eta_u||^2 du \right)$
So by Girsanov's Theorem, the process $\textbf{W}^*_t$ defined by:
$$\textbf{W}^*_t=\textbf{W}_t + \displaystyle\int_0^t \eta _u du$$
is a standard Brownian motion under $\mathbb{Q}$
It may be helpful to write out $\textbf{W}^*_t$ longhand at this point to have a more detailed look at this vector
$W^*_{0t} = W_{0t} + \displaystyle \int_0^t \frac{\lambda(u, S_u, V_u)}{\gamma\sqrt{1-\rho^2}\sqrt{V_u}}du - \displaystyle \int_0^t \frac{\rho(\mu - r)}{\sqrt{1-\rho^2}\sqrt{V_u}}du$
$W^*_{1t} = W_{1t} + \displaystyle \int_0^t \frac{\mu-r}{\sqrt{V_u}}du$
So $W_{0t}^*$ and $W_{1t}^*$ are two independent standard Brownian motions under $\mathbb{Q}$
This brings us to the question of what $\lambda$ is. Clearly $\lambda$ is a parameter which allows us to adjust for the market's attitude to the uncertainty around the volatility of the volatility.
Unlike the first order market price of risk there is no arbitrage free construction of what $\lambda$ must be and so we have to observe this value from the market
Work by Breeden (1979) suggests that $\lambda(t, S_t, V_t)$ is proportional to $V_t$ and hence equal to $\lambda V_t$ where $\lambda$ is a constant so we will proceed on that basis from here
Ultimately this is a matter of fitting a parameter to the data. This analysis by PIMCO suggests the parameter fitting will be fairly stable.
Then under $\mathbb{Q}$, the risk neutral dynamics of the logarithmic price process and the variance process are:
$dx_t=\left(r-\frac{1}{2} V_t\right) dt + \sqrt{V_t} dW_{1t}^*$
Check
$dx_t = (\mu - \frac{1}{2} V_t) dt + \sqrt{V_t} dW_{1t}$
$W_{1t}=W_{1t}^{*} - \displaystyle\int_0^t \frac{\mu-r}{\sqrt{V_u}} du$
$dW_{1t}=dW_{1t}^{*} - \frac{\mu-r}{\sqrt{V_t}} dt$
$dx_t = (\mu - \frac{1}{2} V_t) dt + \sqrt{V_t} \left(dW_{1t}^{*} - \frac{\mu-r}{\sqrt{V_t}} dt \right)$
$dx_t = (r - \frac{1}{2} V_t) dt + \sqrt{V_t} dW_{1t}^{*} $
$dV_t = \kappa^* (\theta^* - V_t) dt + \rho \gamma \sqrt{V_t}dW^*_{1t} + \sqrt{1-\rho^2} \gamma \sqrt{V_t} dW^*_{0t}$
where
$\kappa^*=\kappa + \lambda$ and
$\theta^*=\frac{\kappa \theta}{\kappa + \lambda}$
so $\kappa^*$ and $\theta^*$ are risk neutral model parameters
The check for this line would work the same way as the $dx_t$ check but is simply a little more complicated
We now define the process $W_{2t}^{*}$ by putting:
$W_{2t}^{*}=\rho W_{1t}^{*}+\sqrt{1-\rho ^2} W_{0t}^{*}$
It can be shown that $W_{2t}^{*}$ is a Brownian motion under $\mathbb{Q}$ (fairly trivial as $W_{0t}^{*}$ and $W_{1t}^{*}$ are) and that: $Cov(dW_{1t}^{*},dW_{2t}^{*})=\rho dt$ under $\mathbb{Q}$
So we can re-write the price and variance dynamics as:
$$dS_t=rS_t dt + \sqrt{V_t} S_t dW_{1t}^{*}$$
$$dV_t=\kappa ^{*}(\theta ^{*} - V_t) dt + \gamma \sqrt{V_t} dW_{2t}^{*}$$
In summary, so far we have managed to define our simultaneous stochastic differential equations under $\mathbb{P}$ and then we have recalculated them under $\mathbb{Q}$. In particular, you will notice that the $\mu$ in the $\mathbb{P}$ world has become an $r$ in the $\mathbb{Q}$ world.
We now have two options for valuing an option: a (semi-)analytical solution or Monte Carlo simulation.
We will look at the Monte Carlo option in this course
Stochastic volatility lends itself very well to Monte Carlo methods as the complexity makes it very difficult (although not impossible) to do via an analytical solution
Set up a function to take the parameters for the HSV model
Write code to generate thousands of values of $W_{1t}$
Using the Cholesky Decomposition write code to generate thousands of values of $W_{2t}$
The formulas above are under the real-world probability measure $\mathbb{P}$
What do you need to do next to allow you to create arbitrage-free prices for your options?
Change to a risk neutral probability measure: i.e. replace $\mu$ with $r$
Extend your function to create a single price path using a small time increment $dt$
Extend your function to create many price paths under $\mathbb{Q}$
Extend your function to calculate the risk neutral value of a European call option
Extend your function to calculate the value of a European call or put option
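A minimal Python sketch of these steps under $\mathbb{Q}$ is given below. It uses a simple Euler scheme with the variance truncated at zero (one common, but not the only, way of keeping $V_t$ non-negative), and all function names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def heston_mc_option(S0, K, T, r, V0, kappa, theta, gamma, rho,
                     call=True, n_paths=100_000, n_steps=252):
    """Monte Carlo value of a European option under the risk-neutral Heston dynamics
    dx = (r - V/2) dt + sqrt(V) dW1*,  dV = kappa (theta - V) dt + gamma sqrt(V) dW2*,
    with Corr(dW1*, dW2*) = rho.  Euler scheme with the variance truncated at zero."""
    dt = T / n_steps
    x = np.full(n_paths, np.log(S0))
    V = np.full(n_paths, V0)
    for _ in range(n_steps):
        Z1 = rng.standard_normal(n_paths)
        Z0 = rng.standard_normal(n_paths)
        Z2 = rho * Z1 + np.sqrt(1.0 - rho**2) * Z0   # Cholesky decomposition
        Vp = np.maximum(V, 0.0)                      # full truncation of the variance
        x += (r - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * Z1
        V += kappa * (theta - Vp) * dt + gamma * np.sqrt(Vp * dt) * Z2
    S_T = np.exp(x)
    payoff = np.maximum(S_T - K, 0.0) if call else np.maximum(K - S_T, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Illustrative (risk-neutral) parameter values
print(heston_mc_option(S0=100, K=100, T=1.0, r=0.05, V0=0.04,
                       kappa=2.0, theta=0.04, gamma=0.3, rho=-0.7))
```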
The graphical computer program InterestProj, which illustrates these concepts, is here
The multiple state volatility model is a simple way of producing an observed feature of share prices.
What observed feature do you think this is?
Excess Kurtosis
This is because we can use a multiple state model to simulate periods of high volatility and periods of low volatility; our overall share price movements will therefore have more small movements and more very large movements than a simple lognormal model would predict.
Extend your model to allow for a two volatility state solution
Assume that the changes between the states are separated by exponentially distributed lengths of time
Can you now adjust the additional parameters $\sigma_1$, $\sigma_2$, $\lambda_1$ and $\lambda_2$ (the waiting-time rates) to recreate the distribution of returns observed in the actual FTSE100 data?
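A minimal Python sketch of such a two-state simulation (all names and parameter values are illustrative; the excess kurtosis calculation is just a convenient summary of the fat tails):

```python
import numpy as np

rng = np.random.default_rng(9)

def two_state_returns(sigma1, sigma2, lam1, lam2, mu, T=10.0, dt=1 / 252):
    """Simulate daily log returns where the volatility switches between sigma1 and
    sigma2, the waiting time in state i being exponential with rate lam_i."""
    sigmas, rates = (sigma1, sigma2), (lam1, lam2)
    state = 0
    t_next = rng.exponential(1.0 / rates[state])   # time of the first switch
    returns, t = [], 0.0
    while t < T:
        if t >= t_next:                            # switch volatility state
            state = 1 - state
            t_next = t + rng.exponential(1.0 / rates[state])
        s = sigmas[state]
        returns.append((mu - 0.5 * s**2) * dt + s * np.sqrt(dt) * rng.standard_normal())
        t += dt
    return np.array(returns)

# Illustrative parameters; excess kurtosis > 0 indicates fatter tails than the lognormal model
rets = two_state_returns(sigma1=0.12, sigma2=0.35, lam1=2.0, lam2=6.0, mu=0.06)
excess_kurtosis = ((rets - rets.mean())**4).mean() / rets.var()**2 - 3.0
print(excess_kurtosis)
```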